explainable AI

Terms from Artificial Intelligence: humans at the heart of algorithms

Page numbers refer to the draft copy at present; they will be replaced with the correct numbers when the final book is formatted. Chapter numbers are correct and will not change now.

Explainable AI is the term used to refer to the overall aim of making the decisions and outputs of AI more understandable, and to the range of technologies that can help achieve this. Many forms of machine learning create large internal representations, such as the weights in neural networks, which are very difficult to interpret. In addition, AI has been shown to exhibit bias, and the effects of errors in AI can be life threatening, for example, in an autonomous vehicle. This has led to the desire for systems that are more transparent or comprehensible, and to legislation, such as the European General Data Protection Regulation, that requires companies to provide explanations of critical decisions. Explainability may be achieved by favouring machine learning techniques, such as decision trees, that are inherently more interpretable. Alternatively, various methods have been developed to make black-box models more transparent, often using perturbation techniques such as SHAP or LIME.
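The core idea behind perturbation techniques can be shown with a minimal sketch: alter one input feature at a time and observe how much the model's output changes. This is a toy illustration only; the model `black_box` and the function names are hypothetical, and real SHAP or LIME implementations use more sophisticated weighted sampling and local surrogate models rather than this one-at-a-time scheme.

```python
def black_box(features):
    # Hypothetical opaque model: a fixed nonlinear scoring function
    # standing in for a trained neural network or similar.
    x1, x2, x3 = features
    return 3.0 * x1 + 0.5 * x2 * x2 - 1.0 * x3

def perturbation_importance(model, features, baseline=0.0):
    """Score each feature by how much the model's output changes when
    that feature is replaced by a baseline value, all else held fixed."""
    reference = model(features)
    scores = []
    for i in range(len(features)):
        perturbed = list(features)
        perturbed[i] = baseline  # perturb a single feature
        scores.append(abs(reference - model(perturbed)))
    return scores

scores = perturbation_importance(black_box, [1.0, 2.0, 3.0])
print(scores)  # larger score = feature mattered more for this prediction
```

Because the explanation only queries the model's inputs and outputs, the same approach applies to any black-box model, which is why perturbation methods are so widely used.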

Defined on page 514

Used on Chap. 1: pages 4, 11; Chap. 9: page 178; Chap. 14: page 336; Chap. 18: pages 441, 451; Chap. 19: pages 482, 485; Chap. 20: page 493; Chap. 21: pages 513, 514, 515, 529; Chap. 23: pages 567, 572

Also known as explainable, explainability, XAI